Sparse approximations for kernel learning vector quantization

Authors

  • Daniela Hofmann
  • Barbara Hammer
Abstract

Various prototype-based learning techniques have recently been extended to similarity data by means of kernelization. While state-of-the-art classification results can be achieved this way, kernelization loses one important property of prototype-based techniques: a representation of the solution in terms of a few characteristic prototypes which can be inspected directly by experts. In this contribution, we introduce several different ways to obtain sparse representations for kernel learning vector quantization and compare their efficiency and performance in connection with the underlying data characteristics in diverse benchmark scenarios.
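The core idea behind kernelized LVQ is that each prototype lives in feature space as a linear combination of training points, w_j = Σ_i α_ij φ(x_i), and "sparse" means most coefficients α_ij are zero. A minimal sketch of the resulting distance computation, assuming an RBF kernel (function and variable names here are illustrative, not from the paper):

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    """Gaussian (RBF) kernel matrix between the rows of X and the rows of Y."""
    sq = np.sum(X**2, axis=1)[:, None] + np.sum(Y**2, axis=1)[None, :] - 2.0 * X @ Y.T
    return np.exp(-gamma * sq)

def kernel_distances(k_diag, K_xT, alpha, K_TT):
    """Squared feature-space distance of each point to each prototype.

    A prototype w_j = sum_i alpha[i, j] * phi(x_i) is a kernel expansion;
    sparsity means most entries of alpha[:, j] are zero.
    d^2(x, w_j) = k(x, x) - 2 * sum_i alpha_ij k(x, x_i) + alpha_j^T K alpha_j
    """
    cross = K_xT @ alpha                                   # (n_points, n_protos)
    proto_norms = np.sum(alpha * (K_TT @ alpha), axis=0)   # (n_protos,)
    return k_diag[:, None] - 2.0 * cross + proto_norms[None, :]
```

Classification then assigns each point the label of its nearest prototype; the cost of evaluating `K_xT @ alpha` scales with the number of nonzero rows of `alpha`, which is exactly what a sparse approximation reduces.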


Similar articles

Efficient approximations of robust soft learning vector quantization for non-vectorial data

Due to its intuitive learning algorithms and classification behavior, learning vector quantization (LVQ) enjoys a wide popularity in diverse application domains. In recent years, the classical heuristic schemes have been accompanied by variants which can be motivated by a statistical framework such as robust soft LVQ (RSLVQ). In its original form, LVQ and RSLVQ can be applied to vectorial data ...

A sparse kernelized matrix learning vector quantization model for human activity recognition

The contribution describes our application to the ESANN'2013 Competition on Human Activity Recognition (HAR) using Android-OS smartphone sensor signals. We applied a kernel variant of learning vector quantization with metric adaptation using only one prototype vector per class. This sparse model obtains very good accuracies and additionally provides class correlation information. Further, the m...

Learning vector quantization for proximity data

Prototype-based classifiers such as learning vector quantization (LVQ) often display intuitive and flexible classification and learning rules. However, classical techniques are restricted to vectorial data only, and hence not suited for more complex data structures. Therefore, a few extensions of diverse LVQ variants to more general data which are characterized based on pairwise similarities or...

Applications of lp-Norms and their Smooth Approximations for Gradient Based Learning Vector Quantization

Learning vector quantization applying non-standard metrics became quite popular for classification performance improvement compared to standard approaches using the Euclidean distance. Kernel metrics and quadratic forms belong to the most promising approaches. In this paper we consider Minkowski distances (lp-norms). In particular, l1-norms are known to be robust against noise in data, such tha...
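The l_p distances mentioned above are simple to state, and the reason a smooth approximation is needed for gradient-based learning is that the l_1 term |x_i − y_i| is not differentiable at zero. A small illustrative sketch (the smoothing via a small additive constant is one common choice, not necessarily the paper's):

```python
import numpy as np

def lp_distance(x, y, p=1.0):
    """Minkowski (l_p) distance between vectors x and y."""
    return np.sum(np.abs(x - y) ** p) ** (1.0 / p)

def smooth_lp_distance(x, y, p=1.0, eps=1e-8):
    """Differentiable surrogate: |x_i - y_i| is replaced by
    sqrt((x_i - y_i)^2 + eps), which removes the kink at zero so
    gradient-based LVQ updates are well defined everywhere."""
    return np.sum(((x - y) ** 2 + eps) ** (p / 2.0)) ** (1.0 / p)
```

For p = 2 this recovers the Euclidean distance; for p = 1 the surrogate stays within O(sqrt(eps)) of the exact value.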

An Average Classification Algorithm

Many classification algorithms produce a classifier that is a weighted average of kernel evaluations. When working with a high or infinite dimensional kernel, it is imperative for speed of evaluation and storage issues that as few training samples as possible are used in the kernel expansion. Popular existing approaches focus on altering standard learning algorithms, such as the Support Vector ...
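A classifier that is a weighted average of kernel evaluations has the form f(x) = Σ_i w_i k(x_i, x), with the predicted class given by the sign of f. The storage and speed concern in the paragraph above follows directly from this shape, as a minimal sketch shows (names are illustrative):

```python
import numpy as np

def kernel_decision(x, points, weights, kernel):
    """f(x) = sum_i w_i * k(x_i, x); the class is sign(f(x)).
    Zero-weight terms are skipped, so a sparse expansion needs only
    a few kernel evaluations (and stored samples) at test time."""
    return sum(w * kernel(p, x) for p, w in zip(points, weights) if w != 0.0)
```

With a high- or infinite-dimensional kernel the points x_i themselves must be stored, so shrinking the number of nonzero weights reduces both memory and prediction cost.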



Publication date: 2013